This is a course about statistics with open-source tools: R, RStudio, and so on.
This course is massive!
Every week there will be a submission with a deadline on the following Sunday at 23:55. Late submissions will not be accepted.
After that, three of the other students' submissions will be assigned for peer review. The deadline for the peer reviews is the Wednesday of the same week at 23:55.
The exercises and the peer reviews are all that is required for this course.
Here is the link to my GitHub repository.
Load the wrangled data and take a look at it.
learning2014 <- read.table("data/learning2014.tsv", sep = "\t", header = TRUE)
str(learning2014)
## 'data.frame': 166 obs. of 7 variables:
## $ gender : Factor w/ 2 levels "F","M": 1 2 1 2 2 1 2 1 2 1 ...
## $ age : int 53 55 49 53 49 38 50 37 37 42 ...
## $ attitude: num 3.7 3.1 2.5 3.5 3.7 3.8 3.5 2.9 3.8 2.1 ...
## $ deep : num 3.58 2.92 3.5 3.5 3.67 ...
## $ stra : num 3.38 2.75 3.62 3.12 3.62 ...
## $ surf : num 2.58 3.17 2.25 2.25 2.83 ...
## $ points : int 25 12 24 10 22 21 21 31 24 26 ...
Plot the data.
library(ggplot2)
# initialize plot with data and aesthetic mapping
p1 <- ggplot(learning2014, aes(x = attitude, y = points, col = gender))
# define the visualization type (points)
p2 <- p1 + geom_point()
# add a regression line
p3 <- p2 + geom_smooth(method = "lm")
# add a main title
p4 <- p3 + ggtitle("Student's attitude versus exam points")
# draw the plot
p4
There is a positive correlation between attitude and points, and there is no obvious difference between the two genders. Interestingly, the data contains only two genders.
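As a quick check of this visual impression, the correlation can be computed directly. A minimal sketch, overall and within each gender group:
# overall correlation between attitude and points
cor(learning2014$attitude, learning2014$points)
# correlation within each gender group
by(learning2014, learning2014$gender, function(d) cor(d$attitude, d$points))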
Now, plot all possible pairs of variables as scatter plots:
pairs(learning2014[-1])
This is not very informative: the plots are very small, and there is no regression line to help us see which way each correlation goes.
Draw a more advanced plot matrix:
library(GGally)
library(ggplot2)
# create a more advanced plot matrix with ggpairs()
p <- ggpairs(learning2014, mapping = aes(col = gender, alpha = .3), lower = list(combo = wrap("facethist", bins = 20)))
# draw the plot
p
Age is right-skewed and roughly resembles a Poisson distribution, probably because the subjects are students. The rest of the variables seem approximately normally distributed, as expected.
Apparently, attitude and points have the highest correlation by far.
In addition, there is a negative correlation between surf and deep.
There is a slight negative correlation between surf and points, surf and age, and surf and stra, as well as a positive correlation between stra and points.
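These impressions can be verified from the correlation matrix itself; a minimal sketch (gender, the first column, is excluded because it is a factor):
# correlation matrix of the numeric variables, rounded for readability
round(cor(learning2014[-1]), 2)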
Now, a regression analysis using the three variables that had the highest individual correlation with points:
# create a regression model with three explanatory variables
my_model2 <- lm(points ~ attitude + stra + surf, data = learning2014)
# print out a summary of the model
summary(my_model2)
##
## Call:
## lm(formula = points ~ attitude + stra + surf, data = learning2014)
##
## Residuals:
## Min 1Q Median 3Q Max
## -17.1550 -3.4346 0.5156 3.6401 10.8952
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 11.0171 3.6837 2.991 0.00322 **
## attitude 3.3952 0.5741 5.913 1.93e-08 ***
## stra 0.8531 0.5416 1.575 0.11716
## surf -0.5861 0.8014 -0.731 0.46563
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.296 on 162 degrees of freedom
## Multiple R-squared: 0.2074, Adjusted R-squared: 0.1927
## F-statistic: 14.13 on 3 and 162 DF, p-value: 3.156e-08
The model as a whole is statistically significant (F-test p-value ≈ 3.2e-08), but only attitude has a statistically significant coefficient; stra and surf do not reach significance at the 0.05 level. Here are the diagnostic plots:
par(mfrow = c(2,2))
plot(my_model2, which = c(1, 2, 5))
Nothing looks out of the ordinary in these diagnostic plots: the residuals show no clear pattern against the fitted values, they follow the normal Q-Q line reasonably well, and no observation has excessive leverage, so the model assumptions seem reasonable. Given the coefficient p-values above, however, attitude is the only variable with clear explanatory power for points, and even then the model's explanatory power is modest (adjusted R-squared ≈ 0.19).
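Since stra and surf are not significant, a natural follow-up is to refit the model with attitude alone and compare the summaries; a minimal sketch:
# refit using only the clearly significant explanatory variable
my_model1 <- lm(points ~ attitude, data = learning2014)
# compare e.g. the adjusted R-squared with the three-variable model
summary(my_model1)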
The data was joined by using the following columns as surrogate identifiers for students: school, sex, age, address, famsize, Pstatus, Medu, Fedu, Mjob, Fjob, reason, nursery, internet.
Two new variables were defined: alc_use (the average of the weekday and weekend consumption variables Dalc and Walc) and high_use (TRUE when alc_use is greater than 2).
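For reference, a minimal sketch of this kind of join, assuming the two source questionnaires (student-mat.csv and student-por.csv from the UCI Student Performance data; the file paths here are hypothetical):
library(dplyr)
# columns used as surrogate identifiers for a student
join_cols <- c("school", "sex", "age", "address", "famsize", "Pstatus",
               "Medu", "Fedu", "Mjob", "Fjob", "reason", "nursery", "internet")
math <- read.csv("data/student-mat.csv", sep = ";")
por <- read.csv("data/student-por.csv", sep = ";")
# keep only students present in both questionnaires
math_por <- inner_join(math, por, by = join_cols, suffix = c(".math", ".por"))
# alc_use: average of weekday (Dalc) and weekend (Walc) consumption;
# high_use: TRUE when alc_use is greater than 2
# (the full wrangling also combines the other duplicated answer columns)
alc_joined <- mutate(math_por,
                     alc_use = (Dalc.math + Walc.math) / 2,
                     high_use = alc_use > 2)
The pre-wrangled result is loaded from file below.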
alc <- read.csv("data/alc.csv")
str(alc)
## 'data.frame': 382 obs. of 35 variables:
## $ school : Factor w/ 2 levels "GP","MS": 1 1 1 1 1 1 1 1 1 1 ...
## $ sex : Factor w/ 2 levels "F","M": 1 1 1 1 1 2 2 1 2 2 ...
## $ age : int 18 17 15 15 16 16 16 17 15 15 ...
## $ address : Factor w/ 2 levels "R","U": 2 2 2 2 2 2 2 2 2 2 ...
## $ famsize : Factor w/ 2 levels "GT3","LE3": 1 1 2 1 1 2 2 1 2 1 ...
## $ Pstatus : Factor w/ 2 levels "A","T": 1 2 2 2 2 2 2 1 1 2 ...
## $ Medu : int 4 1 1 4 3 4 2 4 3 3 ...
## $ Fedu : int 4 1 1 2 3 3 2 4 2 4 ...
## $ Mjob : Factor w/ 5 levels "at_home","health",..: 1 1 1 2 3 4 3 3 4 3 ...
## $ Fjob : Factor w/ 5 levels "at_home","health",..: 5 3 3 4 3 3 3 5 3 3 ...
## $ reason : Factor w/ 4 levels "course","home",..: 1 1 3 2 2 4 2 2 2 2 ...
## $ nursery : Factor w/ 2 levels "no","yes": 2 1 2 2 2 2 2 2 2 2 ...
## $ internet : Factor w/ 2 levels "no","yes": 1 2 2 2 1 2 2 1 2 2 ...
## $ guardian : Factor w/ 3 levels "father","mother",..: 2 1 2 2 1 2 2 2 2 2 ...
## $ traveltime: int 2 1 1 1 1 1 1 2 1 1 ...
## $ studytime : int 2 2 2 3 2 2 2 2 2 2 ...
## $ failures : int 0 0 2 0 0 0 0 0 0 0 ...
## $ schoolsup : Factor w/ 2 levels "no","yes": 2 1 2 1 1 1 1 2 1 1 ...
## $ famsup : Factor w/ 2 levels "no","yes": 1 2 1 2 2 2 1 2 2 2 ...
## $ paid : Factor w/ 2 levels "no","yes": 1 1 2 2 2 2 1 1 2 2 ...
## $ activities: Factor w/ 2 levels "no","yes": 1 1 1 2 1 2 1 1 1 2 ...
## $ higher : Factor w/ 2 levels "no","yes": 2 2 2 2 2 2 2 2 2 2 ...
## $ romantic : Factor w/ 2 levels "no","yes": 1 1 1 2 1 1 1 1 1 1 ...
## $ famrel : int 4 5 4 3 4 5 4 4 4 5 ...
## $ freetime : int 3 3 3 2 3 4 4 1 2 5 ...
## $ goout : int 4 3 2 2 2 2 4 4 2 1 ...
## $ Dalc : int 1 1 2 1 1 1 1 1 1 1 ...
## $ Walc : int 1 1 3 1 2 2 1 1 1 1 ...
## $ health : int 3 3 3 5 5 5 3 1 1 5 ...
## $ absences : int 5 3 8 1 2 8 0 4 0 0 ...
## $ G1 : int 2 7 10 14 8 14 12 8 16 13 ...
## $ G2 : int 8 8 10 14 12 14 12 9 17 14 ...
## $ G3 : int 8 8 11 14 12 14 12 10 18 14 ...
## $ alc_use : num 1 1 2.5 1 1.5 1.5 1 1 1 1 ...
## $ high_use : logi FALSE FALSE TRUE FALSE FALSE FALSE ...
The following four variables were chosen: goout, sex, studytime, and romantic.
The hypothesis is that going out leads to drinking, and that males drink more because they are heavier on average. Furthermore, drinking and going out leave less time for studying. Being in a relationship should reduce alcohol consumption, since there is no need to get wasted to meet new people.
Explore the variables of interest:
library(tidyr)
library(dplyr)
library(ggplot2)
alc %>% group_by(high_use) %>% summarise(count = n(), mean_goout=mean(goout),
mean_studytime=mean(studytime))
## # A tibble: 2 x 4
## high_use count mean_goout mean_studytime
## <lgl> <int> <dbl> <dbl>
## 1 FALSE 268 2.85 2.15
## 2 TRUE 114 3.72 1.77
alc %>% group_by(high_use, sex) %>% summarise(count = n())
## # A tibble: 4 x 3
## # Groups: high_use [?]
## high_use sex count
## <lgl> <fct> <int>
## 1 FALSE F 156
## 2 FALSE M 112
## 3 TRUE F 42
## 4 TRUE M 72
alc %>% group_by(high_use, romantic) %>% summarise(count = n())
## # A tibble: 4 x 3
## # Groups: high_use [?]
## high_use romantic count
## <lgl> <fct> <int>
## 1 FALSE no 180
## 2 FALSE yes 88
## 3 TRUE no 81
## 4 TRUE yes 33
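Proportions make these group differences easier to judge than raw counts; a quick sketch:
# share of high users within each sex and relationship status
alc %>% group_by(sex) %>% summarise(high_use_rate = mean(high_use))
alc %>% group_by(romantic) %>% summarise(high_use_rate = mean(high_use))
The bar plots below visualize the same comparisons.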
g_goout <- ggplot(alc, aes(x = goout, fill=high_use)) +
geom_bar() + xlab("Going out with friends") +
ggtitle("Going out with friends from 1 (very low) to 5 (very high) by alcohol use")
g_studytime <- ggplot(alc, aes(x = studytime, fill=high_use)) +
geom_bar() + xlab("Weekly study time") +
ggtitle("Weekly study time [1 (<2 hours), 2 (2 to 5 hours), 3 (5 to 10 hours), or 4 (>10 hours)] by alcohol use")
g_sex <- ggplot(alc, aes(x = sex, fill=high_use)) +
geom_bar() +
ggtitle("Sex by alcohol use")
g_romantic <- ggplot(alc, aes(x = romantic, fill=high_use)) +
geom_bar() +
ggtitle("With a romantic relationship (yes/no) by alcohol use")
# Arrange the plots into a grid
library("gridExtra")
grid.arrange(g_goout, g_studytime, g_sex, g_romantic, ncol=2, nrow=2)
In summary, all parts of the hypothesis seem to hold at least weakly: high users go out more, study less, are more often male, and are slightly less often in a romantic relationship.
Fit a logistic regression model using high_use as the target variable and goout, studytime, sex, and romantic as explanatory variables.
m <- glm(high_use ~ goout + studytime + sex + romantic, data = alc, family = "binomial")
Based on the model, the variables goout, studytime, and sex are associated with alcohol consumption. High alcohol consumption is associated with going out (as expected), more study time is associated with lower consumption, and males drink more than females.
Summary of the model:
summary(m)
##
## Call:
## glm(formula = high_use ~ goout + studytime + sex + romantic,
## family = "binomial", data = alc)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.7365 -0.8114 -0.5009 0.9081 2.6642
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -2.6988 0.5712 -4.725 2.30e-06 ***
## goout 0.7536 0.1187 6.350 2.15e-10 ***
## studytime -0.4774 0.1683 -2.837 0.00456 **
## sexM 0.6657 0.2585 2.576 0.01000 *
## romanticyes -0.1424 0.2699 -0.528 0.59767
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 465.68 on 381 degrees of freedom
## Residual deviance: 393.67 on 377 degrees of freedom
## AIC: 403.67
##
## Number of Fisher Scoring iterations: 4
Coefficients of the model as odds ratios and their confidence intervals:
or <- coef(m) %>% exp
ci <- confint(m) %>% exp
cbind(or, ci)
## or 2.5 % 97.5 %
## (Intercept) 0.06728696 0.02129867 0.2010636
## goout 2.12456419 1.69404697 2.7003422
## studytime 0.62040325 0.44145946 0.8558631
## sexM 1.94589655 1.17538595 3.2443631
## romanticyes 0.86724548 0.50714091 1.4648961
In summary, a one-unit increase in goout is associated with roughly 2.1 times higher odds of high alcohol consumption, and a one-unit increase in studytime is associated with roughly 0.6 times the odds (i.e., lower odds) of high alcohol consumption.
Being male is associated with roughly 1.9 times higher odds of high alcohol consumption compared to being female.
Being in a romantic relationship is not significantly associated with high alcohol consumption (the confidence interval of its odds ratio includes 1), so that part of the hypothesis was wrong.
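To make the odds ratios concrete, the fitted model can predict the probability of high use for hypothetical student profiles (the values below are chosen purely for illustration):
# three made-up profiles: a female who rarely goes out, a female who goes
# out a lot, and a male who goes out a lot
new_students <- data.frame(goout = c(2, 5, 5),
                           studytime = 2,
                           sex = c("F", "F", "M"),
                           romantic = "no")
predict(m, newdata = new_students, type = "response")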
Fit a logistic regression model with only the explanatory variables that were statistically significantly associated with high alcohol consumption:
m <- glm(high_use ~ goout + studytime + sex, data = alc, family = "binomial")
Prediction performance of the model:
probability <- predict(m, type="response")
alc <- mutate(alc, probability=probability)
alc <- mutate(alc, prediction=probability > 0.5)
table(high_use = alc$high_use, prediction = alc$prediction)
## prediction
## high_use FALSE TRUE
## FALSE 250 18
## TRUE 76 38
The model is better at predicting low alcohol consumption than high alcohol consumption.
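The same table is perhaps easier to read as proportions; a small sketch using base R:
# joint proportions with row/column margins
table(high_use = alc$high_use, prediction = alc$prediction) %>%
  prop.table() %>% addmargins()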
Visualize the actual class against the predicted probability, colored by the predicted class:
g <- ggplot(alc, aes(x = probability, y = high_use, col=prediction))
g + geom_point()
Calculate the total proportion of misclassified individuals using the regression model, and compare it with a simple guessing strategy where everyone is classified into the most prevalent class:
loss_func <- function(class, prob) {
n_wrong <- abs(class - prob) > 0.5
mean(n_wrong)
}
loss_func(class = alc$high_use, prob = alc$probability)
## [1] 0.2460733
loss_func(class = alc$high_use, prob = 0)
## [1] 0.2984293
Using the regression model, 24.6% of the individuals are misclassified, compared to 29.8% when guessing that everybody belongs to the low-use class. The model thus provides a modest improvement over always guessing the most prevalent class.
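For completeness, the opposite guess can be checked with the same loss function; classifying everyone as a high user should misclassify the 268 low users, i.e. roughly 70% of the individuals:
# guess that everybody belongs to the high-use class
loss_func(class = alc$high_use, prob = 1)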
Perform 10-fold cross-validation of the model to estimate its performance on unseen data, measured as the proportion of misclassified individuals. The mean prediction error in the test set:
library(boot)
cv <- cv.glm(data = alc, cost = loss_func, glmfit = m, K = 10)
cv$delta[1]
## [1] 0.2460733
The mean prediction error in the test set is about 0.25, marginally better than the model in the DataCamp exercises, which had a mean prediction error of about 0.26.
Construct models with different numbers of predictors and calculate the test set and training set prediction errors:
predictors <- c('school', 'sex', 'age', 'address', 'famsize', 'Pstatus', 'Medu', 'Fedu', 'Mjob', 'Fjob', 'reason', 'nursery', 'internet', 'guardian', 'traveltime', 'studytime', 'failures', 'schoolsup', 'famsup', 'paid', 'activities', 'higher', 'romantic', 'famrel', 'freetime', 'goout', 'health', 'absences', 'G1', 'G2', 'G3')
# Fit several models and record the test and training errors
# 1) Use all of the predictors.
# 2) Drop one predictor and fit a new model.
# 3) Continue until only one predictor is left in the model.
test_error <- numeric(length(predictors))
training_error <- numeric(length(predictors))
for(i in length(predictors):1) {
model_formula <- paste0("high_use ~ ", paste(predictors[1:i], collapse = " + "))
glmfit <- glm(model_formula, data = alc, family = "binomial")
cv <- cv.glm(data = alc, cost = loss_func, glmfit = glmfit, K = 10)
test_error[i] <- cv$delta[1]
training_error[i] <- loss_func(alc$high_use, predict(glmfit,type="response"))
}
data_error <- rbind(data.frame(n_predictors=1:length(predictors),
prediction_error=test_error,
type = "test error"),
data.frame(n_predictors=1:length(predictors),
prediction_error=training_error,
type = "training error"))
g <- ggplot(data_error, aes(x = n_predictors, y = prediction_error, col=type))
g + geom_point()
Load Boston dataset from the MASS package:
library(corrplot)
library(dplyr)
library(MASS)
data("Boston")
str(Boston)
## 'data.frame': 506 obs. of 14 variables:
## $ crim : num 0.00632 0.02731 0.02729 0.03237 0.06905 ...
## $ zn : num 18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
## $ indus : num 2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
## $ chas : int 0 0 0 0 0 0 0 0 0 0 ...
## $ nox : num 0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
## $ rm : num 6.58 6.42 7.18 7 7.15 ...
## $ age : num 65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
## $ dis : num 4.09 4.97 4.97 6.06 6.06 ...
## $ rad : int 1 2 2 3 3 3 5 5 5 5 ...
## $ tax : num 296 242 242 222 222 222 311 311 311 311 ...
## $ ptratio: num 15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
## $ black : num 397 397 393 395 397 ...
## $ lstat : num 4.98 9.14 4.03 2.94 5.33 ...
## $ medv : num 24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
The dataset has 14 variables and 506 observations. Full details can be found in the dataset’s documentation.
Summary of the variables in the dataset:
summary(Boston)
## crim zn indus chas
## Min. : 0.00632 Min. : 0.00 Min. : 0.46 Min. :0.00000
## 1st Qu.: 0.08204 1st Qu.: 0.00 1st Qu.: 5.19 1st Qu.:0.00000
## Median : 0.25651 Median : 0.00 Median : 9.69 Median :0.00000
## Mean : 3.61352 Mean : 11.36 Mean :11.14 Mean :0.06917
## 3rd Qu.: 3.67708 3rd Qu.: 12.50 3rd Qu.:18.10 3rd Qu.:0.00000
## Max. :88.97620 Max. :100.00 Max. :27.74 Max. :1.00000
## nox rm age dis
## Min. :0.3850 Min. :3.561 Min. : 2.90 Min. : 1.130
## 1st Qu.:0.4490 1st Qu.:5.886 1st Qu.: 45.02 1st Qu.: 2.100
## Median :0.5380 Median :6.208 Median : 77.50 Median : 3.207
## Mean :0.5547 Mean :6.285 Mean : 68.57 Mean : 3.795
## 3rd Qu.:0.6240 3rd Qu.:6.623 3rd Qu.: 94.08 3rd Qu.: 5.188
## Max. :0.8710 Max. :8.780 Max. :100.00 Max. :12.127
## rad tax ptratio black
## Min. : 1.000 Min. :187.0 Min. :12.60 Min. : 0.32
## 1st Qu.: 4.000 1st Qu.:279.0 1st Qu.:17.40 1st Qu.:375.38
## Median : 5.000 Median :330.0 Median :19.05 Median :391.44
## Mean : 9.549 Mean :408.2 Mean :18.46 Mean :356.67
## 3rd Qu.:24.000 3rd Qu.:666.0 3rd Qu.:20.20 3rd Qu.:396.23
## Max. :24.000 Max. :711.0 Max. :22.00 Max. :396.90
## lstat medv
## Min. : 1.73 Min. : 5.00
## 1st Qu.: 6.95 1st Qu.:17.02
## Median :11.36 Median :21.20
## Mean :12.65 Mean :22.53
## 3rd Qu.:16.95 3rd Qu.:25.00
## Max. :37.97 Max. :50.00
Plot the variables and explore the data:
library(GGally)
library(ggplot2)
p <- ggpairs(Boston, mapping = aes(alpha=0.3),
lower = list(combo = wrap("facethist", bins = 20)))
p
Correlation of the variables:
cor(Boston) %>% corrplot(method = "circle", type = "upper", cl.pos = "b", tl.pos = "d")
Scale the dataset so that each variable has mean \(0\) and standard deviation \(1\):
\[x_{scaled}=\frac{x - \mu_{x}}{\sigma_{x}}\] where \(\mu_{x}\) is the mean of \(x\) and \(\sigma_{x}\) is the standard deviation of \(x\).
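The scale() function applies exactly this transformation column-wise; a quick sketch verifying it against the manual formula for one variable:
manual <- (Boston$crim - mean(Boston$crim)) / sd(Boston$crim)
# scale() returns a matrix, so coerce to a plain numeric vector first
all.equal(as.numeric(scale(Boston$crim)), manual)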
boston_scaled <- scale(Boston) %>% as.data.frame()
summary(boston_scaled)
## crim zn indus
## Min. :-0.419367 Min. :-0.48724 Min. :-1.5563
## 1st Qu.:-0.410563 1st Qu.:-0.48724 1st Qu.:-0.8668
## Median :-0.390280 Median :-0.48724 Median :-0.2109
## Mean : 0.000000 Mean : 0.00000 Mean : 0.0000
## 3rd Qu.: 0.007389 3rd Qu.: 0.04872 3rd Qu.: 1.0150
## Max. : 9.924110 Max. : 3.80047 Max. : 2.4202
## chas nox rm age
## Min. :-0.2723 Min. :-1.4644 Min. :-3.8764 Min. :-2.3331
## 1st Qu.:-0.2723 1st Qu.:-0.9121 1st Qu.:-0.5681 1st Qu.:-0.8366
## Median :-0.2723 Median :-0.1441 Median :-0.1084 Median : 0.3171
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.:-0.2723 3rd Qu.: 0.5981 3rd Qu.: 0.4823 3rd Qu.: 0.9059
## Max. : 3.6648 Max. : 2.7296 Max. : 3.5515 Max. : 1.1164
## dis rad tax ptratio
## Min. :-1.2658 Min. :-0.9819 Min. :-1.3127 Min. :-2.7047
## 1st Qu.:-0.8049 1st Qu.:-0.6373 1st Qu.:-0.7668 1st Qu.:-0.4876
## Median :-0.2790 Median :-0.5225 Median :-0.4642 Median : 0.2746
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.6617 3rd Qu.: 1.6596 3rd Qu.: 1.5294 3rd Qu.: 0.8058
## Max. : 3.9566 Max. : 1.6596 Max. : 1.7964 Max. : 1.6372
## black lstat medv
## Min. :-3.9033 Min. :-1.5296 Min. :-1.9063
## 1st Qu.: 0.2049 1st Qu.:-0.7986 1st Qu.:-0.5989
## Median : 0.3808 Median :-0.1811 Median :-0.1449
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.4332 3rd Qu.: 0.6024 3rd Qu.: 0.2683
## Max. : 0.4406 Max. : 3.5453 Max. : 2.9865
Create a factor variable crime from crim by cutting it at its quartiles into “low”, “med_low”, “med_high”, and “high” categories:
bins <- quantile(boston_scaled$crim)
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE,
             labels = c("low", "med_low", "med_high", "high"))
boston_scaled <- dplyr::select(boston_scaled, -crim)
boston_scaled <- data.frame(boston_scaled, crime)
Divide the dataset into training and test sets, with 80% of the observations in the training set and 20% in the test set:
set.seed(1)
train.idx <- sample(nrow(boston_scaled), size = 0.8 * nrow(boston_scaled))
train <- boston_scaled[train.idx,]
test <- boston_scaled[-train.idx,]
Fit a linear discriminant analysis (LDA) model on the training set, using the categorical crime rate as the target variable and all the other variables in the dataset as predictors:
lda.fit <- lda(crime ~ ., data = train)
The LDA biplot:
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)) {
heads <- coef(x)
arrows(x0 = 0, y0 = 0,
x1 = myscale * heads[,choices[1]],
y1 = myscale * heads[,choices[2]],
col = color, length = arrow_heads)
text(myscale * heads[,choices],
labels = row.names(heads),
cex = tex, col = color, pos = 3)
}
classes <- as.numeric(train$crime)
plot(lda.fit, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit, myscale = 2)
Use the fitted LDA model to predict the categorical crime rate in the test set. Cross-tabulate the observed classes and the predicted classes:
correct_classes <- test$crime
test <- dplyr::select(test, -crime)
lda.pred <- predict(lda.fit, newdata = test)
table(correct = correct_classes, predicted = lda.pred$class)
## predicted
## correct low med_low med_high high
## low 15 11 1 0
## med_low 8 20 2 0
## med_high 1 9 16 0
## high 0 0 0 19
The model seems to predict the “high” class perfectly and the other classes reasonably well. The prediction accuracy is worst for the “low” class, where a large proportion of the observations is misclassified as “med_low”.
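The overall accuracy can be computed directly from the predictions; from the table above it works out to (15 + 20 + 16 + 19) / 102 ≈ 0.69:
# proportion of correctly classified test observations
mean(correct_classes == lda.pred$class)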
Reload the Boston dataset and standardize it as above. Calculate the Euclidean distance between the observations:
data("Boston")
boston_scaled <- scale(Boston) %>% as.data.frame()
dist_eu <- dist(boston_scaled)
summary(dist_eu)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.1343 3.4625 4.8241 4.9111 6.1863 14.3970
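Other distance metrics can be computed the same way; for example, the Manhattan distance:
dist_man <- dist(boston_scaled, method = "manhattan")
summary(dist_man)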
Run the k-means algorithm with 3 clusters and visualize the results:
# seeded above
km <- kmeans(boston_scaled, centers = 3)
pairs(boston_scaled, col = km$cluster)
Calculate the total within-cluster sum of squares (TWCSS) as the number of clusters changes from 1 to 10:
k_max <- 10
twcss <- sapply(1:k_max, function(k){kmeans(boston_scaled, k)$tot.withinss})
qplot(x = 1:k_max, y = twcss, geom = 'line')
The optimal number of clusters is indicated by the point where the TWCSS drops sharply. Based on the graph, 2 seems to be the optimal number.
Perform k-means with 2 clusters and visualize the results:
km <- kmeans(boston_scaled, centers = 2)
pairs(boston_scaled, col = km$cluster)
Perform k-means clustering with 3 clusters on the scaled Boston dataset. Use the cluster assignments as the target variable for LDA analysis:
km <- kmeans(boston_scaled, centers = 3)
boston_scaled$kmeans_cluster <- km$cluster
lda.fit <- lda(kmeans_cluster ~ ., data = boston_scaled)
The LDA biplot:
plot(lda.fit, dimen = 2, col = boston_scaled$kmeans_cluster, pch = boston_scaled$kmeans_cluster)
lda.arrows(lda.fit, myscale = 2)
Based on the biplot, the most influential linear separators are age, dis, rad, and tax.
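This impression can be checked against the discriminant coefficients themselves; a sketch ranking the variables by the magnitude of their LD1 loading (lda.fit$scaling is the same matrix that lda.arrows() draws):
# variables with the largest absolute coefficients on the first discriminant
head(sort(abs(lda.fit$scaling[, "LD1"]), decreasing = TRUE))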